Causal deep learning (CDL) is a new and important research area within the larger field of machine learning. With CDL, researchers aim to structure and encode causal knowledge in the extremely flexible representation space of deep learning models. Doing so promises more informed, robust, and general predictions and inference -- which is important! However, CDL is still in its infancy. For example, it is not clear how we ought to compare different methods, as they differ so much in their outputs, in the way they encode causal knowledge, and even in how they represent this knowledge. This is a living paper that categorises methods in causal deep learning beyond Pearl's ladder of causation. We refine the rungs of Pearl's ladder and add a separate dimension that categorises the parametric assumptions of both input and representation, arriving at the map of causal deep learning. Our map covers machine learning disciplines such as supervised learning, reinforcement learning, generative modelling, and beyond. Our paradigm is a tool that helps researchers to find benchmarks, compare methods, and, most importantly, identify research gaps. With this work we aim to structure the avalanche of papers being published on causal deep learning: while papers on the topic appear daily, the map itself remains fixed. We open-source our map for others to use as they see fit: perhaps to offer guidance in a related-works section, or to better highlight the contribution of their paper.
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences. Fair ML has largely focused on the protection of single attributes in the simpler setting where both attributes and target outcomes are binary. However, many real-world applications entail the simultaneous protection of multiple sensitive attributes, which are often not simply binary but continuous or categorical. To address this more challenging task, we introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces. This leads to two practical tools: first, the FairCOCCO Score, a normalised metric that can quantify fairness in settings with single or multiple sensitive attributes of arbitrary type; and second, a regularisation term that can be incorporated into arbitrary learning objectives to obtain fair predictors. These contributions address crucial gaps in the algorithmic fairness literature, and we empirically demonstrate consistent improvements over state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
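A minimal sketch of the kind of normalised kernel dependence score the FairCOCCO Score describes is given below, written with plain NumPy; the estimator is a generic HSIC-style statistic, not the paper's exact construction, and all function names are illustrative. Near-zero values indicate approximate independence between predictions and (continuous or suitably encoded) sensitive attributes; larger values indicate stronger dependence.

```python
import numpy as np

def rbf_kernel(x, gamma=1.0):
    # Pairwise RBF kernel matrix for an (n, d) array of samples.
    sq = np.sum(x ** 2, axis=1, keepdims=True)
    return np.exp(-gamma * (sq + sq.T - 2 * x @ x.T))

def centre(K):
    # Double-centre a kernel matrix.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kernel_fairness_score(preds, sensitive, gamma=1.0):
    """Normalised HSIC-style dependence between predictions and sensitive attributes.
    Roughly 0 under independence (fair); larger values mean stronger dependence."""
    P = np.asarray(preds, dtype=float).reshape(len(preds), -1)
    S = np.asarray(sensitive, dtype=float).reshape(len(sensitive), -1)
    K, L = centre(rbf_kernel(P, gamma)), centre(rbf_kernel(S, gamma))
    return np.sum(K * L) / (np.sqrt(np.sum(K * K) * np.sum(L * L)) + 1e-12)
```

Used as a regulariser, such a score could be added to a training loss with a trade-off weight, mirroring the role of the paper's second tool.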
While there have been a number of remarkable breakthroughs in machine learning (ML), much of the focus has been placed on model development. However, to truly realize the potential of machine learning in real-world settings, additional aspects must be considered across the ML pipeline. Data-centric AI is emerging as a unifying paradigm that could enable such reliable end-to-end pipelines. However, this remains a nascent area with no standardized framework to guide practitioners to the necessary data-centric considerations or to communicate the design of data-centric ML systems. To address this gap, we propose DC-Check, an actionable checklist-style framework to elicit data-centric considerations at different stages of the ML pipeline: Data, Training, Testing, and Deployment. This data-centric lens aims to promote thoughtfulness and transparency prior to system development. Additionally, we highlight specific data-centric AI challenges and research opportunities. DC-Check is aimed at both practitioners and researchers to guide day-to-day development. To make it easy to engage with DC-Check and associated resources, we provide a DC-Check companion website (https://www.vanderschaar-lab.com/dc-check/), which will also serve as an updated resource as methods and tooling evolve over time.
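Purely as an illustration of how DC-Check's four pipeline stages could be tracked inside a project, the snippet below organises them as a small Python structure; the example questions are placeholders invented here, not the actual checklist items (those live on the companion website).

```python
# Hypothetical, illustrative checklist structure; the questions are placeholders,
# not DC-Check's actual items (see https://www.vanderschaar-lab.com/dc-check/).
DC_CHECK_STAGES = {
    "Data": ["Are data sources, collection processes, and known biases documented?"],
    "Training": ["Are data-quality issues (noise, missingness, imbalance) handled explicitly?"],
    "Testing": ["Is evaluation stratified across relevant subgroups and distribution shifts?"],
    "Deployment": ["Is data drift monitored once the system is live?"],
}

def unanswered(responses):
    # Return, per stage, the checklist items with no recorded answer for a project.
    return {stage: [q for q in qs if not responses.get(q)]
            for stage, qs in DC_CHECK_STAGES.items()}
```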
Concept-based explanations permit understanding the predictions of a deep neural network (DNN) through the lens of concepts specified by users. Existing methods assume that the examples illustrating a concept are mapped in a fixed direction of the DNN's latent space. When this holds, the concept can be represented by a concept activation vector (CAV) pointing in that direction. In this work, we propose to relax this assumption by allowing concept examples to be scattered across different clusters in the DNN's latent space. Each concept is then represented by a region of the DNN latent space that includes these clusters, which we call a concept activation region (CAR). To formalise this idea, we introduce an extension of the CAV formalism based on the kernel trick and support vector classifiers. This CAR formalism yields global concept-based explanations and local concept-based feature importance. We prove that CAR explanations built with radial kernels are invariant under latent space isometries. In this way, CAR assigns the same explanations to latent spaces that have the same geometry. We further demonstrate that CARs offer (1) more accurate descriptions of how concepts are scattered in the DNN's latent space; (2) global explanations that are closer to human concept annotations; and (3) concept-based feature importance scores that meaningfully relate to each other. Finally, we use CARs to show that DNNs can autonomously rediscover known scientific concepts, such as the prostate cancer grading system.
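The following is a minimal sketch of the CAR construction described above, assuming latent representations of concept positives and negatives have already been extracted from some DNN layer; scikit-learn's RBF-kernel SVC stands in for the support vector classifier, and all names are illustrative rather than the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

def fit_concept_region(latent_pos, latent_neg, gamma="scale"):
    # Fit an RBF-kernel support vector classifier separating concept examples from
    # negatives; its positive decision region plays the role of the concept
    # activation region (CAR) in the latent space.
    X = np.vstack([latent_pos, latent_neg])
    y = np.concatenate([np.ones(len(latent_pos)), np.zeros(len(latent_neg))])
    return SVC(kernel="rbf", gamma=gamma, probability=True).fit(X, y)

def concept_activation(region, latents):
    # Probability that each latent representation falls inside the concept region,
    # usable as a simple concept score for a set of inputs.
    return region.predict_proba(latents)[:, 1]
```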
We study the problem of adaptively identifying patient subpopulations that benefit from a given treatment during a confirmatory clinical trial. Such adaptive clinical trials, often referred to as adaptive enrichment designs, have been thoroughly studied in biostatistics, but with a focus on a limited number of subgroups (typically two) that make up the (sub)population, and a small number of interim analysis points. In this paper, we aim to relax the classical restrictions on such designs and investigate how to incorporate ideas from the recent machine learning literature on adaptive and online experimentation to make trials more flexible and efficient. We find that the unique characteristics of the subpopulation selection problem -- most importantly, that (i) one is usually interested in finding subpopulations with any treatment benefit (not necessarily the single subgroup with the largest effect) given a limited budget, and (ii) effectiveness only needs to be demonstrated across the subpopulation on average -- give rise to interesting challenges and new desiderata when designing algorithmic solutions. Building on these findings, we propose AdaGGI and AdaGCPI, two meta-algorithms for subpopulation construction that focus on identifying good subgroups and a good composite subpopulation, respectively. We empirically investigate their performance across a range of simulation scenarios and gain insight into their (dis)advantages across different settings.
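As a purely hypothetical illustration of the budgeted subpopulation-selection setting (not AdaGGI or AdaGCPI themselves), one could greedily include candidate subgroups whose interim effect estimates are credibly positive until a recruitment budget is exhausted; the confidence-bound rule and all names below are assumptions made for the example.

```python
import numpy as np

def greedy_subpopulation(effects, ses, sizes, budget, z=1.645):
    # effects, ses: per-subgroup treatment effect estimates and standard errors from
    # interim data; sizes: per-subgroup recruitment cost; budget: total recruitment budget.
    lower = np.asarray(effects) - z * np.asarray(ses)   # lower confidence bounds
    chosen, used = [], 0
    for g in np.argsort(-lower):                        # most promising subgroups first
        if lower[g] > 0 and used + sizes[g] <= budget:
            chosen.append(int(g))
            used += sizes[g]
    return chosen   # composite subpopulation with any credible benefit under the budget
```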
Leveraging labelled data from multiple domains to enable prediction in another domain without labels is an important yet challenging problem. To address this problem, we introduce the framework DAPDAG (\textbf{D}omain \textbf{A}daptation via \textbf{P}erturbed \textbf{DAG} Reconstruction) and propose to learn an auto-encoder that performs inference on population statistics given the features and reconstructs a directed acyclic graph (DAG) as an auxiliary task. The underlying DAG structure is assumed to be invariant among the observed variables, whose conditional distributions are allowed to vary across domains, driven by a latent environment variable $E$. The encoder is designed to serve as an inference device for $E$, while the decoder reconstructs each observed variable conditioned on its graphical parents in the DAG and the inferred $E$. We train the encoder and decoder jointly in an end-to-end manner and conduct experiments on synthetic and real datasets with mixed variables. Empirical results demonstrate that reconstructing the DAG benefits the approximate inference. Furthermore, our approach achieves competitive performance against other benchmarks in prediction tasks, with better adaptation ability, especially in target domains that differ significantly from the source domains.
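A minimal PyTorch sketch of this encoder/decoder idea is given below: the encoder infers the latent environment variable $E$ from all features, and a per-variable decoder reconstructs each observed variable from its DAG parents and the inferred $E$. The adjacency matrix is assumed given here (in DAPDAG it is reconstructed as an auxiliary task), and all module names and sizes are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PerturbedDAGAutoencoder(nn.Module):
    def __init__(self, d, latent_dim=1, hidden=32):
        super().__init__()
        self.d = d
        # Encoder: infers the latent environment variable E from all observed features.
        self.encoder = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                                     nn.Linear(hidden, latent_dim))
        # One small decoder per variable: reconstructs x_i from its DAG parents and E.
        self.decoders = nn.ModuleList([
            nn.Sequential(nn.Linear(d + latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(d)
        ])

    def forward(self, x, adjacency):
        # x: (n, d) features; adjacency[j, i] = 1 iff variable j is a parent of variable i.
        e = self.encoder(x)                      # inferred environment variable E
        recons = []
        for i in range(self.d):
            parents = x * adjacency[:, i]        # mask out non-parents of variable i
            recons.append(self.decoders[i](torch.cat([parents, e], dim=-1)))
        return torch.cat(recons, dim=-1), e

# Training would combine a reconstruction loss over all variables with a supervised
# loss on the target variable, end to end (these details are assumptions).
```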
Uncertainty quantification (UQ) is essential for creating trustworthy machine learning models. Recent years have seen a steep rise in UQ methods that can flag suspicious examples; however, it is often unclear what exactly these methods identify. In this work, we propose an assumption-light method for interpreting UQ models themselves. We introduce the confusion density matrix -- a kernel-based approximation of the misclassification density -- and use it to categorise suspicious examples identified by a given UQ method into three classes: out-of-distribution (OOD) examples, boundary (BND) examples, and examples in regions of high in-distribution misclassification (IDM). Through extensive experiments, we shed light on existing UQ methods and show that the causes of uncertainty differ across models. Additionally, we show how the proposed framework can leverage the categorised examples to improve predictive performance.
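A rough sketch of the kernel-density intuition behind the confusion density matrix is shown below: estimate a density over the representations of misclassified training examples, then score whether a flagged test point lies in a region of high in-distribution misclassification (IDM). This is an illustrative simplification, not the paper's exact construction, and all names are assumptions.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def misclassification_density(train_feats, train_labels, train_preds, bandwidth=1.0):
    # Fit a Gaussian KDE over the feature representations of misclassified training points.
    errors = train_feats[np.asarray(train_preds) != np.asarray(train_labels)]
    return KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(errors)

def idm_score(kde, test_feats):
    # Higher log-density -> the flagged point sits closer to regions where the model
    # tends to err in-distribution (IDM), rather than being OOD or near a boundary.
    return kde.score_samples(test_feats)
```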
Closed-form differential equations, including partial differential equations and higher-order ordinary differential equations, are among the most important tools used by scientists to model and better understand natural phenomena. Discovering these equations directly from data is challenging, as it requires modelling relationships between various derivatives that are not observed in the data (the \textit{equation-data mismatch}) and involves searching through a huge space of possible equations. Current methods make strong assumptions about the form of the equation and thus fail to discover many well-known systems. Moreover, many of them resolve the equation-data mismatch by estimating derivatives, which makes them inadequate for noisy and infrequently sampled systems. To this end, we propose D-CIPHER, which is robust to measurement artefacts and can discover a new and very general class of differential equations. We further design a novel optimisation procedure, CoLLie, to help D-CIPHER search through this class. Finally, we demonstrate empirically that it can discover many well-known equations that are beyond the capabilities of current methods.
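To make the equation-data mismatch concrete, the snippet below shows the standard weak-form trick of testing a candidate equation against a smooth test function so that derivatives never have to be estimated from noisy samples; it illustrates the general idea only, not D-CIPHER or CoLLie, and the candidate ODE and test function are chosen purely for the example.

```python
import numpy as np

def weak_residual(t, u, a):
    # Weak-form residual of the candidate ODE u' + a*u = 0 on [t0, t1]:
    # integrate against phi with phi(t0) = phi(t1) = 0, so that by integration by
    # parts \int u' phi dt = -\int u phi' dt and u' is never estimated directly.
    phi = np.sin(np.pi * (t - t[0]) / (t[-1] - t[0]))
    dphi = (np.pi / (t[-1] - t[0])) * np.cos(np.pi * (t - t[0]) / (t[-1] - t[0]))
    return -np.trapz(u * dphi, t) + a * np.trapz(u * phi, t)

# Noisy samples of u(t) = exp(-2t): the residual is near zero for the correct a = 2
# and clearly nonzero for a wrong candidate, despite the noise.
t = np.linspace(0.0, 1.0, 200)
u = np.exp(-2.0 * t) + 0.01 * np.random.randn(t.size)
print(weak_residual(t, u, a=2.0), weak_residual(t, u, a=5.0))
```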
Estimating personalized effects of treatments is a complex yet pervasive problem. To tackle it, recent developments in the machine learning (ML) literature on heterogeneous treatment effect estimation have produced many sophisticated, but opaque, tools: owing to their flexibility, modularity, and ability to learn constrained representations, neural networks in particular have become central to this literature. Unfortunately, the assets of such black boxes come at a cost: models typically involve countless nontrivial operations, making it difficult to understand what they have learned. Yet, understanding these models can be crucial -- in a medical context, for example, discovered knowledge on treatment effect heterogeneity could inform treatment prescription in clinical practice. In this work, we therefore use post-hoc feature importance methods to identify features that influence the model's predictions. This allows us to evaluate treatment effect estimators along a new and important dimension that has been overlooked in previous work: we construct a benchmarking environment to empirically investigate the ability of personalized treatment effect models to identify predictive covariates -- covariates that determine differential responses to treatment. Our benchmarking environment then enables us to provide new insights into the strengths and weaknesses of different types of treatment effect models, as we vary different challenges specific to treatment effect estimation -- e.g. the ratio of prognostic to predictive information, the possible nonlinearity of potential outcomes, and the presence and type of confounding.
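A toy version of this evaluation idea is sketched below: fit a simple plug-in (T-learner) estimator of the conditional treatment effect, then measure, per covariate, how much permuting it changes the estimated effect; covariates with a large score are the predictive covariates the benchmark asks models to recover. The learner and the permutation-importance rule are generic stand-ins chosen for the sketch, not the benchmark's actual configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def t_learner_cate(X, y, w):
    # Fit separate outcome models for treated (w == 1) and control (w == 0) units;
    # their difference is a plug-in estimate of the conditional treatment effect.
    m1 = RandomForestRegressor().fit(X[w == 1], y[w == 1])
    m0 = RandomForestRegressor().fit(X[w == 0], y[w == 0])
    return lambda X_new: m1.predict(X_new) - m0.predict(X_new)

def effect_feature_importance(cate_fn, X, n_repeats=10, seed=0):
    # Permutation-style importance of each covariate for the *estimated effect*
    # (not the outcome): large values flag covariates driving effect heterogeneity.
    rng = np.random.default_rng(seed)
    base = cate_fn(X)
    imp = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = Xp[rng.permutation(X.shape[0]), j]   # permute one covariate
            imp[j] += np.mean((cate_fn(Xp) - base) ** 2)
    return imp / n_repeats
```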
Estimating counterfactual outcomes over time has the potential to unlock personalized healthcare by assisting decision-makers to answer "what-if" questions. Existing causal inference approaches typically consider regular, discrete-time intervals between observations and treatment decisions and hence are unable to naturally model irregularly sampled data, which is the common setting in practice. To handle arbitrary observation patterns, we interpret the data as samples from an underlying continuous-time process and propose to model its latent trajectory explicitly using the mathematics of controlled differential equations. This leads to a new approach, the Treatment Effect Neural Controlled Differential Equation (TE-CDE), that allows the potential outcomes to be evaluated at any time point. In addition, adversarial training is used to adjust for time-dependent confounding, which is critical in longitudinal settings and is an added challenge not encountered in conventional time series. To assess solutions to this problem, we propose a controllable simulation environment based on a model of tumor growth to reflect a range of scenarios representative of clinical settings. TE-CDE consistently outperforms existing approaches in all simulated scenarios with irregular sampling.
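The snippet below is a minimal, self-contained sketch of the neural controlled differential equation machinery that TE-CDE builds on (not the TE-CDE model itself, which adds treatments and adversarial balancing): the hidden state evolves as dz = f_theta(z) dX along a control path X built from irregularly sampled observations, here integrated with simple Euler steps over the observed increments. Layer sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NeuralCDE(nn.Module):
    def __init__(self, path_dim, hidden_dim=16):
        super().__init__()
        # path_dim counts the time channel plus the observed covariates.
        self.embed = nn.Linear(path_dim, hidden_dim)
        # f_theta maps the hidden state to a (hidden_dim x path_dim) matrix.
        self.vector_field = nn.Sequential(
            nn.Linear(hidden_dim, 64), nn.Tanh(), nn.Linear(64, hidden_dim * path_dim)
        )
        self.readout = nn.Linear(hidden_dim, 1)

    def forward(self, times, values):
        # times: (T,) irregular observation times; values: (T, covariate_dim).
        path = torch.cat([times.unsqueeze(-1), values], dim=-1)   # control path X
        z = self.embed(path[0])
        for k in range(len(times) - 1):
            dX = path[k + 1] - path[k]                            # increment of the control
            F = self.vector_field(z).view(-1, path.shape[1])
            z = z + F @ dX                                        # Euler step: dz = f(z) dX
        return self.readout(z)                                    # prediction at the last time

times = torch.tensor([0.0, 0.3, 1.1, 1.2, 2.7])   # irregularly sampled times
values = torch.randn(5, 3)                         # three covariates per observation
print(NeuralCDE(path_dim=4)(times, values))
```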